
Recovering Robust Training of Physics-Informed Deep Operator Networks
Physics-informed neural networks (PINNs) [1] and deep operator networks (DeepONets) [2] have shown promise for effectively solving equations modeling physical systems. However, these networks can be difficult or impossible to train accurately for some systems of equations. One way to improve training is to use a small amount of data; however, such data can be expensive to produce. We will introduce our novel multifidelity framework [3] for stacking PINNs and physics-informed DeepONets [4] that facilitates training by progressively reducing the errors in predictions when no data is available. In stacking networks, we successively build a chain of networks, where the output at one step acts as a low-fidelity input for training the next step, gradually increasing the expressivity of the learned model. We will finally discuss the extension to domain decomposition using the finite basis method [5], including applications to the newly developed Kolmogorov-Arnold networks (KANs) [6]. Domain decomposition with PINNs, DeepONets, and KANs [7] significantly reduces the errors compared with using each method on the full domain.

REFERENCES
[1] Maziar Raissi, Paris Perdikaris, and George E. Karniadakis. "Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations." Journal of Computational Physics 378 (2019): 686-707.
[2] Lu Lu, et al. "Learning nonlinear operators via DeepONet based on the universal approximation theorem of operators." Nature Machine Intelligence 3.3 (2021): 218-229.
[3] Amanda A. Howard, et al. "Multifidelity deep operator networks for data-driven and physics-informed problems." Journal of Computational Physics 493 (2023): 112462.
[4] Amanda A. Howard, Sarah H. Murphy, Shady E. Ahmed, and Panos Stinis. "Stacked networks improve physics-informed training: Applications to neural networks and deep operator networks." Foundations of Data Science. doi: 10.3934/fods.2024029.
[5] Ben Moseley, Andrew Markham, and Tarje Nissen-Meyer. "Finite Basis Physics-Informed Neural Networks (FBPINNs): a scalable domain decomposition approach for solving differential equations." Advances in Computational Mathematics 49.4 (2023): 62.
[6] Ziming Liu, et al. "KAN: Kolmogorov-Arnold Networks." arXiv preprint arXiv:2404.19756 (2024).
[7] Amanda A. Howard, et al. "Finite basis Kolmogorov-Arnold networks: domain decomposition for data-driven and physics-informed problems." arXiv preprint
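The stacking idea described in the abstract can be illustrated with a deliberately simplified toy sketch. This is not the physics-informed training of [4]: instead of PINN stages trained on a PDE residual, each hypothetical stage here is a small random-feature least-squares model fit to data, and the target function `sin(3x)` stands in for the true solution. The key mechanism is the same, though: each stage receives the previous stage's prediction as an extra input, and because that prediction is itself one of the stage's features, each stage can do no worse than its predecessor.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_stage(x, u_prev, target):
    """One stage of the stacked chain (toy stand-in for a PINN stage).

    Builds random tanh features of (x, u_prev), plus a bias column and
    u_prev itself, and solves a linear least-squares fit. Including
    u_prev as a feature guarantees the stage's error is no worse than
    the previous stage's.
    """
    w, v, b = rng.normal(size=(3, 6))  # fixed random feature weights
    phi = np.tanh(np.outer(x, w) + np.outer(u_prev, v) + b)
    A = np.column_stack([np.ones_like(x), u_prev, phi])
    coef, *_ = np.linalg.lstsq(A, target, rcond=None)
    return A @ coef

x = np.linspace(-1.0, 1.0, 200)
target = np.sin(3.0 * x)           # stand-in for the true solution
u = np.zeros_like(x)               # stage 0: trivial low-fidelity guess

errors = [np.sqrt(np.mean((u - target) ** 2))]
for _ in range(4):
    u = fit_stage(x, u, target)    # previous output is a low-fidelity input
    errors.append(np.sqrt(np.mean((u - target) ** 2)))

print(errors)  # RMS error is non-increasing across the stacked stages
```

Each individual stage is weakly expressive (only six random features), so no single stage fits the target well; accuracy is recovered by chaining stages, which mirrors how stacking progressively increases the expressivity of the learned model.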